1,922 research outputs found

    Final Rises in task-oriented and conversational dialogue

    Modelling Participant Affect in Meetings with Turn-Taking Features

    This paper explores the relationship between turn-taking and meeting affect. To investigate this, we model post-meeting ratings of satisfaction, cohesion and leadership from participants of AMI corpus meetings using group and individual turn-taking features. The results indicate that participants gave higher satisfaction and cohesiveness ratings to meetings with greater group turn-taking freedom and higher individual very-short-utterance rates, while lower ratings were associated with more silence and speaker overlap. Beyond its broad applicability to satisfaction ratings, turn-taking freedom was found to be a better predictor than equality of speaking time of whether participants felt that everyone had a chance to contribute. When dialogue act information is included, substantive feedback turns such as assessments are more predictive of meeting affect than information-giving acts or backchannels. This work highlights the importance of feedback turns and of modelling group-level activity in multiparty dialogue for understanding the social aspects of speech.
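    A minimal sketch of the kind of modelling described above: regressing post-meeting satisfaction ratings on turn-taking features. The feature names, synthetic data, and ridge-regression model are illustrative assumptions, not the authors' AMI-corpus setup.

```python
# Hedged sketch: predicting satisfaction ratings from turn-taking features.
# Features and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_meetings = 120

# Hypothetical group- and individual-level turn-taking features.
X = np.column_stack([
    rng.uniform(0.0, 1.0, n_meetings),   # group turn-taking "freedom"
    rng.uniform(0.0, 0.5, n_meetings),   # proportion of silence
    rng.uniform(0.0, 0.3, n_meetings),   # proportion of speaker overlap
    rng.uniform(0.0, 5.0, n_meetings),   # very-short-utterance rate per minute
])

# Simulated ratings: higher with freedom and short utterances, lower with
# silence and overlap (mirroring the reported direction of effects).
y = (3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] - 1.0 * X[:, 2] + 0.2 * X[:, 3]
     + rng.normal(0.0, 0.3, n_meetings))

model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```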

    Everyone has an accent

    Robotic Speech Synthesis: Perspectives on Interactions, Scenarios, and Ethics

    In recent years, many works have investigated the feasibility of conversational robots for performing specific tasks, such as healthcare and interviewing. Along with this development comes a practical issue: how should we synthesize robotic voices to meet the needs of different situations? In this paper, we discuss this issue from three perspectives: 1) the difficulties of synthesizing non-verbal and interaction-oriented speech signals, particularly backchannels; 2) the classification of scenarios for robotic voice synthesis; 3) the ethical issues in designing a robot's voice with respect to its emotion and identity. We present findings from the relevant literature and our prior work, aiming to draw the attention of human-robot interaction researchers to the design of better conversational robots in the future.
    Comment: Accepted for the HRI 2022 Workshop "Robo-Identity: Exploring Artificial Identity and Emotion via Speech Interactions" at HRI 2022, 7 March 2022

    LPath+: A First-Order Complete Language for Linguistic Tree Query

    PACLIC 19 / Taipei, Taiwan / December 1-3, 2005

    The prosody of presupposition projection in naturally-occurring utterances

    In experimental studies, prosodically-marked pragmatic focus has been found to influence the projection of factive presuppositions of utterances like these parents didn’t know the kid was gone (Cummins and Rohde, 2015; Tonhauser, 2016; Djärv and Bacovcin, 2017), supporting question-based analyses of projection (i.a., Abrusán, 2011; Abrusán, 2016; Simons et al., 2017; Beaver et al., 2017). However, no prior work has explored whether this effect extends to naturally-occurring utterances. In a large set of naturally-occurring utterances, we find that prosodically-marked focus influences projection in utterances with factive embedding predicates, but not in those with non-factive predicates. We argue that our findings support an account on which the lexical semantics of the predicate contributes to projection to the extent that it admits QUD alternatives that can be assumed to entail the content of the complement.
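    As a rough illustration of the kind of analysis described, a regression with a focus-by-factivity interaction term can test whether prosodic focus affects projection only for factive predicates. The data, column names, and model family below are assumed for illustration and are not taken from the paper.

```python
# Hedged sketch: an interaction model for focus x factivity effects on
# projection ratings, with simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "focus_marked": rng.integers(0, 2, n),  # 1 = prosodically-marked focus
    "factive": rng.integers(0, 2, n),       # 1 = factive embedding predicate
})

# Simulated projection ratings: focus matters only for factive predicates.
df["projection"] = (
    2.0
    + 1.2 * df["factive"]
    + 0.8 * df["focus_marked"] * df["factive"]
    + rng.normal(0.0, 0.5, n)
)

# The interaction term captures "focus influences projection only with factives".
fit = smf.ols("projection ~ focus_marked * factive", data=df).fit()
print(fit.summary().tables[1])
```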

    Recognising emotions in spoken dialogue with hierarchically fused acoustic and lexical features

    Polarity and Intensity: the Two Aspects of Sentiment Analysis

    Current multimodal sentiment analysis frames sentiment score prediction as a general machine learning task. However, what the sentiment score actually represents has often been overlooked. As a measurement of opinions and affective states, a sentiment score generally consists of two aspects: polarity and intensity. We decompose sentiment scores into these two aspects and study how they are conveyed through individual modalities and combined multimodal models in a naturalistic monologue setting. In particular, we build unimodal and multimodal multi-task learning models with sentiment score prediction as the main task and polarity and/or intensity classification as the auxiliary tasks. Our experiments show that sentiment analysis benefits from multi-task learning, and that individual modalities differ in how they convey the polarity and intensity aspects of sentiment.
    Comment: Published at the First Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML) at ACL 2018
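    A hedged sketch of the multi-task set-up described above: a shared encoder with a sentiment-score regression head as the main task and polarity/intensity classification heads as auxiliary tasks. Layer sizes, loss weights, and input dimensions are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: multi-task learning with a main regression task and two
# auxiliary classification tasks, on hypothetical utterance features.
import torch
import torch.nn as nn

class MultiTaskSentiment(nn.Module):
    def __init__(self, in_dim=300, hidden=128, n_intensity_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.score_head = nn.Linear(hidden, 1)        # main: sentiment score
        self.polarity_head = nn.Linear(hidden, 2)     # auxiliary: polarity
        self.intensity_head = nn.Linear(hidden, n_intensity_classes)  # auxiliary: intensity

    def forward(self, x):
        h = self.encoder(x)
        return (self.score_head(h).squeeze(-1),
                self.polarity_head(h),
                self.intensity_head(h))

model = MultiTaskSentiment()
features = torch.randn(8, 300)                # a batch of hypothetical features
score_true = torch.randn(8)
polarity_true = torch.randint(0, 2, (8,))
intensity_true = torch.randint(0, 3, (8,))

score_pred, polarity_logits, intensity_logits = model(features)
loss = (
    nn.functional.mse_loss(score_pred, score_true)                       # main task
    + 0.5 * nn.functional.cross_entropy(polarity_logits, polarity_true)  # auxiliary
    + 0.5 * nn.functional.cross_entropy(intensity_logits, intensity_true)
)
loss.backward()
```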